
    A Performance-Explainability-Fairness Framework For Benchmarking ML Models

    Machine learning (ML) models have achieved remarkable success in various applications; however, ensuring their robustness and fairness remains a critical challenge. In this research, we present a comprehensive framework designed to evaluate and benchmark ML models through the lenses of performance, explainability, and fairness. This framework addresses the increasing need for a holistic assessment of ML models, considering not only their predictive power but also their interpretability and equitable deployment. The proposed framework leverages a multi-faceted evaluation approach, integrating performance metrics with explainability and fairness assessments. Performance evaluation incorporates standard measures such as accuracy, precision, and recall, and extends to the overall balanced error rate and the area under the receiver operating characteristic (ROC) curve (AUC) to capture model behavior across different performance aspects. Explainability assessment employs state-of-the-art techniques to quantify the interpretability of model decisions, ensuring that model behavior can be understood and trusted by stakeholders. The fairness evaluation examines model predictions in terms of demographic parity and equalized odds, thereby addressing concerns of bias and discrimination in the deployment of ML systems. To demonstrate the practical utility of the framework, we apply it to a diverse set of ML algorithms across various functional domains, including finance, criminology, education, and healthcare prediction. The results showcase the importance of a balanced evaluation approach, revealing trade-offs between performance, explainability, and fairness that can inform model selection and deployment decisions. Furthermore, we provide insights into analyzing these trade-offs when selecting the appropriate model for use cases where performance, interpretability, and fairness are all important.
In summary, the Performance-Explainability-Fairness Framework offers a unified methodology for evaluating and benchmarking ML models, enabling practitioners and researchers to make informed decisions about model suitability and ensuring responsible and equitable AI deployment. We believe that this framework represents a crucial step towards building trustworthy and accountable ML systems in an era where AI plays an increasingly prominent role in decision-making processes.
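The two fairness criteria the abstract names can be made concrete with a small sketch. The function names and toy data below are illustrative assumptions, not taken from the paper; they compute the demographic parity difference and the equalized odds gap for a binary classifier over two demographic groups.

```python
# Hedged sketch of the fairness metrics named in the abstract
# (demographic parity, equalized odds) for a binary classifier.
# Function names and toy data are illustrative, not from the paper.

def rate(preds, mask):
    """Fraction of positive predictions within a subgroup."""
    sel = [p for p, m in zip(preds, mask) if m]
    return sum(sel) / len(sel) if sel else 0.0

def demographic_parity_diff(preds, groups):
    """|P(yhat=1 | group=0) - P(yhat=1 | group=1)|."""
    return abs(rate(preds, [g == 0 for g in groups])
               - rate(preds, [g == 1 for g in groups]))

def equalized_odds_gap(preds, labels, groups):
    """Largest disparity in TPR or FPR between the two groups."""
    gaps = []
    for y in (0, 1):  # y=1 compares TPRs, y=0 compares FPRs
        gaps.append(abs(
            rate(preds, [g == 0 and l == y for g, l in zip(groups, labels)])
            - rate(preds, [g == 1 and l == y for g, l in zip(groups, labels)])))
    return max(gaps)

# Toy data: predictions, true labels, protected-group membership.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_diff(preds, groups))        # -> 0.5
print(equalized_odds_gap(preds, labels, groups))     # -> 0.5
```

A value of 0 on either metric indicates parity between the groups; values near 1 indicate severe disparity, which is what a benchmarking framework of this kind would flag.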

    Study of Speech Synthesis Systems Towards Implementation of a Multilingual Text-To-Speech System

    This work researches state-of-the-art Text-To-Speech (TTS) systems through a comprehensive study of the technological evolution of different TTS systems. We also studied several speech synthesis approaches and available tools for TTS systems, with the objective of identifying their applicability to the development of a Bangla TTS system. To our knowledge, very little or no work has been done in this area. This work is organized in two phases. First, a comparative survey of different TTS systems was conducted to evaluate their potential to be used for a Bangla TTS system, thereby approaching a multilingual TTS system. It is hypothesized that existing TTS systems for other languages cannot be used to implement a Bangla system. Several distinguishing characteristics of Bangla sounds were identified that make it necessary to build a dedicated Bangla TTS framework. In the second phase, the development of a preliminary framework is underway. Initial experiments are being conducted to test the output quality produced by simple concatenation of elementary speech units defined by us.
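The "simple concatenation of elementary speech units" the abstract describes can be sketched as follows. The unit inventory and waveform values are made-up placeholders (not Bangla speech data), and the short linear crossfade at each join is one common assumption for softening the seams between concatenated units.

```python
# Hedged sketch of concatenative synthesis: elementary speech units
# (here, tiny made-up sample lists) are joined back-to-back with a
# short linear crossfade. Unit names and values are illustrative only.

def crossfade_concat(units, overlap=2):
    """Concatenate waveform units, blending `overlap` samples at each join."""
    out = list(units[0])
    for unit in units[1:]:
        n = min(overlap, len(out), len(unit))
        # Linear crossfade over the overlapping region.
        for i in range(n):
            w = (i + 1) / (n + 1)
            out[-n + i] = out[-n + i] * (1 - w) + unit[i] * w
        out.extend(unit[n:])
    return out

# Illustrative "unit inventory": each unit is a short list of samples.
inventory = {
    "a": [0.1, 0.3, 0.5, 0.3, 0.1],
    "m": [0.0, 0.2, 0.2, 0.0],
}

wave = crossfade_concat([inventory["a"], inventory["m"], inventory["a"]])
print(len(wave))  # 5 + (4-2) + (5-2) = 10 samples
```

In a real system the inventory would hold recorded diphones or phonemes for the target language, which is where the Bangla-specific sound characteristics identified in the survey would come in.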

    Explainable Artificial Intelligence in the Medical Domain: A Systematic Review

    The applications of Artificial Intelligence (AI) and Machine Learning (ML) techniques in different medical fields are rapidly growing. AI holds great promise in terms of beneficial, accurate, and effective preventive and curative interventions. At the same time, there are also concerns regarding potential risks, harm, and trust issues arising from the opacity of some AI algorithms due to their un-explainability. Overall, how can the decisions from these AI-based systems be trusted if the decision-making logic cannot be properly explained? Explainable Artificial Intelligence (XAI) tries to shed light on these questions. We study the recent developments on this topic within the medical domain. The objective of this study is to provide a systematic review of the methods and techniques of explainable AI within the medical domain as observed in the literature, while identifying future research opportunities.

    Towards a Performance-Explainability-Fairness Framework for Benchmarking ML Models

    Artificial Intelligence (AI) holds great promise for beneficial, accurate, and effective predictive and real-time decision-making in a wide range of use cases. However, there are concerns regarding potential risks, harm, trust, and fairness issues arising from some AI algorithms' opacity and potential unfairness, owing to their un-explainability and questionable objectivity. This study proposes a framework for evaluating a machine learning model that incorporates explainability for AI fairness, as no such framework currently exists. We evaluate its applicability on a classification problem using multiple classifiers. The experimental case study demonstrates the successful application of the performance-explainability-fairness framework to the classification problem. The framework can guide means of improving fairness in machine learning models.
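One plausible way the framework's three axes could be combined when benchmarking multiple classifiers is a weighted composite score. The classifier names, per-axis scores, and weights below are made-up placeholders, not results from the paper; the sketch only illustrates the kind of ranking such a framework could produce.

```python
# Hedged sketch: ranking candidate classifiers on the framework's three
# axes via a weighted composite. All scores and weights are illustrative
# placeholders, not experimental results from the study.

candidates = {
    "logistic_regression": {"performance": 0.82, "explainability": 0.90, "fairness": 0.75},
    "random_forest":       {"performance": 0.88, "explainability": 0.60, "fairness": 0.70},
    "gradient_boosting":   {"performance": 0.91, "explainability": 0.55, "fairness": 0.65},
}

# Example weights reflecting how much a use case values each axis.
weights = {"performance": 0.4, "explainability": 0.3, "fairness": 0.3}

def composite(scores):
    """Weighted sum of the three per-axis scores."""
    return sum(weights[axis] * scores[axis] for axis in weights)

ranked = sorted(candidates.items(), key=lambda kv: composite(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {composite(scores):.3f}")
```

With these placeholder numbers the most accurate model is not the top-ranked one, which mirrors the trade-off the abstract highlights: a use case that weights explainability and fairness can prefer a simpler, more interpretable classifier.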

    Fairness Challenges in Artificial Intelligence

    Fairness is a highly desirable human value in day-to-day decisions that affect human life. In recent years, many successful applications of AI systems have been developed, and increasingly, AI methods are becoming part of many new applications for decision-making tasks that were previously carried out by human beings. Questions have been raised: 1) can the decisions be trusted? 2) are they fair? Overall, are AI-based systems making fair decisions, or are they increasing unfairness in society? This chapter presents a systematic literature review (SLR) of existing work on AI fairness challenges. To this end, a conceptual bias-mitigation framework for organizing and discussing AI fairness-related research is developed and presented. The systematic review maps the AI fairness challenges to components of the proposed framework, based on the solutions suggested in the literature. Future research opportunities are also identified.